Customize existing Sample Applications
Adapt and enhance the existing sample applications to meet your specific use case requirements. This section guides you through downloading, building, and customizing the source code with the Qualcomm® Intelligent Multimedia Product (QIMP) SDK on a RUBIK Pi 3 target device running Canonical Ubuntu, giving you full control over application behavior and enabling performance optimization across runtime environments.
Prebuilt applications are great for quick evaluation, but customizing them allows you to:
- Integrate your own models, media, or logic
- Optimize performance for specific runtime targets (CPU, GPU, NPU)
- Add new features or modify existing ones to match your use case
- Experiment with different pipeline configurations using GStreamer
- Build production-ready applications starting from a working baseline
Prerequisites
- The Ubuntu OS must be flashed on the device
- Terminal access with appropriate permissions
- If you haven't previously installed the PPA packages, run the following commands to install them:
git clone -b ubuntu_setup --single-branch https://github.com/rubikpi-ai/rubikpi-script.git
cd rubikpi-script
./install_ppa_pkgs.sh
Build from Source
Follow the steps below to download, configure, and compile the sample application source code. This allows you to modify application behavior and integrate your own logic as needed.
1️⃣ Install the following packages, which are required to download the source code:
sudo apt-add-repository -s ppa:ubuntu-qcom-iot/qcom-ppa
sudo apt-get install adreno-dev
sudo apt-get install gstreamer1.0-qcom-sample-apps-utils-dev
2️⃣ Install the build dependencies
Run the following command to install the plugin dependencies needed to compile the source code:
sudo apt build-dep gst-plugins-qti-oss
3️⃣ Download source code
Download the sample application source code
cd /home/ubuntu
sudo apt source gst-plugins-qti-oss
4️⃣ Sample application code walkthrough
Consider gst-ai-usb-camera-app for the code walkthrough. It is a GStreamer-based C application developed by Qualcomm that demonstrates how to use a USB camera for several purposes:
- Show live video on a display
- Save video to a file
- Stream video over RTSP
- Run object detection using AI models
The source code of gst-ai-usb-camera-app is available in:
cd gst-plugins-qti-oss/gst-sample-apps/gst-ai-usb-camera-app
Details
a. Header Files and Constants
#include <glib-unix.h>
#include <stdio.h>
#include <gst/gst.h>
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <json-glib/json-glib.h>
These provide support for:
- GLib: Utility functions and main loop
- GStreamer: Multimedia framework
- Video4Linux2 (V4L2): Accessing USB camera
- JSON-GLib: Reading configuration from JSON
Then we define default values:
#define DEFAULT_WIDTH 1280
#define DEFAULT_HEIGHT 720
#define DEFAULT_FRAMERATE 30
#define DEFAULT_OUTPUT_FILENAME "/etc/media/video.mp4"
#define DEFAULT_IP "127.0.0.1"
#define DEFAULT_PORT "8900"
These are used if the user doesn't provide custom settings.
b. Application Context Structures
GstCameraAppCtx
This structure holds the state of the application:
struct GstCameraAppCtx {
    GstElement *pipeline;
    GMainLoop *mloop;
    gchar *output_file;
    gchar *ip_address;
    gchar *port_num;
    gchar *enable_ml;
    gchar dev_video[16];
    enum GstSinkType sinktype;
    enum GstVideoFormat video_format;
    gint width;
    gint height;
    gint framerate;
};
GstAppOptions
This structure holds user-defined options from the config file:
typedef struct {
    gchar *file_path;
    gchar *model_path;
    gchar *labels_path;
    gchar *constants;
    gchar **snpe_layers;
    GstCameraSourceType camera_type;
    GstModelType model_type;
    GstYoloModelType yolo_model_type;
    gdouble threshold;
    gint delegate_type;
    gint snpe_layer_count;
    gboolean use_cpu;
    gboolean use_gpu;
    gboolean use_dsp;
} GstAppOptions;
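Before the JSON configuration is parsed, the context is allocated and pre-filled with the defaults defined earlier. The following is a minimal sketch of what gst_app_context_new() (called later in main()) might look like; the body is an assumption, only the struct fields and DEFAULT_* values come from the code above:
/* Sketch only: allocate the context and apply the DEFAULT_* values.
 * Assumes struct GstCameraAppCtx is typedef'd as GstCameraAppCtx in the
 * sample app headers; the real function may do more. */
static GstCameraAppCtx *
gst_app_context_new (void)
{
  GstCameraAppCtx *ctx = g_new0 (GstCameraAppCtx, 1);

  ctx->width       = DEFAULT_WIDTH;       // 1280
  ctx->height      = DEFAULT_HEIGHT;      // 720
  ctx->framerate   = DEFAULT_FRAMERATE;   // 30
  ctx->output_file = g_strdup (DEFAULT_OUTPUT_FILENAME);
  ctx->ip_address  = g_strdup (DEFAULT_IP);
  ctx->port_num    = g_strdup (DEFAULT_PORT);

  return ctx;
}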
c. Reading Configuration from JSON
The function parse_json() reads the config file and sets values in appctx and options.
Example config:
{
  "width": 1280,
  "height": 720,
  "framerate": 30,
  "output": "waylandsink",
  "enable-object-detection": "TRUE",
  "yolo-model-type": "yolov8",
  "ml-framework": "tflite"
}
Code snippet:
if (json_object_has_member(root_obj, "width")) {
    appctx->width = json_object_get_int_member(root_obj, "width");
}
This sets the camera resolution width from the config file.
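String-valued members follow the same pattern. For example, reading the "output" member could look like the sketch below; the sink-type mapping and the GST_WAYLAND_SINK value are assumptions, only json_object_get_string_member() is the actual JSON-GLib call:
/* Sketch: map the "output" string from the config to a sink type.
 * GST_WAYLAND_SINK is an assumed value of enum GstSinkType. */
if (json_object_has_member (root_obj, "output")) {
    const gchar *output = json_object_get_string_member (root_obj, "output");
    if (g_strcmp0 (output, "waylandsink") == 0)
        appctx->sinktype = GST_WAYLAND_SINK;
}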
d. Finding the USB Camera
Function: find_usb_camera_node()
This function loops through /dev/video0 to /dev/video63 to find a valid USB camera.
while (idx < MAX_VID_DEV_CNT) {
    // Build the device node path, e.g. /dev/video0, /dev/video1, ...
    snprintf(appctx->dev_video, sizeof(appctx->dev_video), "/dev/video%d", idx);
    mFd = open(appctx->dev_video, O_RDWR);
    // Query the driver name; USB (UVC) cameras report "uvcvideo"
    ioctl(mFd, VIDIOC_QUERYCAP, &v2cap);
    if (strcmp((const char *)v2cap.driver, "uvcvideo") == 0) {
        break;
    }
    idx++;
}
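The excerpt omits error handling for brevity. If you modify this probe loop, consider checking the open() return value and closing descriptors that do not belong to a UVC camera. A sketch of a hardened loop body (not the original code):
/* Sketch: same loop body as above with basic error handling added. */
mFd = open (appctx->dev_video, O_RDWR);
if (mFd < 0) {
    idx++;        // node missing or not accessible, try the next one
    continue;
}
if (ioctl (mFd, VIDIOC_QUERYCAP, &v2cap) == 0 &&
    strcmp ((const char *) v2cap.driver, "uvcvideo") == 0) {
    break;        // found a USB (UVC) camera, keep mFd and dev_video
}
close (mFd);      // not a UVC camera, release the descriptor
idx++;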
e. Creating the GStreamer Pipeline
Function: create_preview_pipe()
This function builds a pipeline based on the output type (display, file, or RTSP).
Example: Live Preview
v4l2src → capsfilter → waylandsink
Code:
v4l2src = gst_element_factory_make("v4l2src", "v4l2src");
capsfilter = gst_element_factory_make("capsfilter", "capsfilter");
waylandsink = gst_element_factory_make("waylandsink", "waylandsink");
Example: Save to File
v4l2src → capsfilter → qtivtransform → v4l2h264enc → h264parse → filesink
Code:
filesink = gst_element_factory_make("filesink", "filesink");
v4l2h264enc = gst_element_factory_make("v4l2h264enc", "v4l2h264enc");
h264parse = gst_element_factory_make("h264parse", "h264parse");
Example: RTSP Streaming
v4l2src → capsfilter → qtivtransform → v4l2h264enc → h264parse → qtirtspbin
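After the elements for any of these variants are created, they still have to be configured, added to the pipeline, and linked. Below is a minimal sketch for the live-preview case using standard GStreamer calls; the exact caps and error handling in the source may differ:
/* Sketch: build caps from the configured resolution, add the elements
 * to the pipeline, and link them. */
GstCaps *caps = gst_caps_new_simple ("video/x-raw",
    "width", G_TYPE_INT, appctx->width,
    "height", G_TYPE_INT, appctx->height,
    "framerate", GST_TYPE_FRACTION, appctx->framerate, 1,
    NULL);
g_object_set (capsfilter, "caps", caps, NULL);
gst_caps_unref (caps);

gst_bin_add_many (GST_BIN (appctx->pipeline), v4l2src, capsfilter, waylandsink, NULL);
if (!gst_element_link_many (v4l2src, capsfilter, waylandsink, NULL)) {
    g_printerr ("Failed to link the preview pipeline\n");
}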
f. Object Detection Pipeline
Function: create_pipe()
This builds a more complex pipeline for AI-based object detection.
Pipeline Flow (display branch):
v4l2src → capsfilter → tee → qtivcomposer → waylandsink
The tee also feeds an inference branch (qtimlvconverter → ML inference element → qtimlvdetection), whose detection results are composed onto the video by qtivcomposer.
Code:
qtimlvconverter = gst_element_factory_make("qtimlvconverter", "qtimlvconverter");
qtimlelement = gst_element_factory_make("qtimltflite", "qtimlelement");
qtimlvdetection = gst_element_factory_make("qtimlvdetection", "qtimlvdetection");
qtivcomposer = gst_element_factory_make("qtivcomposer", "qtivcomposer");
These plugins handle:
- Preprocessing: qtimlvconverter
- Inference: qtimltflite, qtimlsnpe, or qtimlqnn
- Postprocessing: qtimlvdetection
- Overlay: qtivcomposer
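Because the pipeline contains a tee, each branch needs its own requested source pad. The following sketch shows how the display and inference branches might be wired together, assuming tee and waylandsink are created the same way as the elements above; the pad template names ("src_%u", "sink_%u") and exact wiring are assumptions, verify them with gst-inspect-1.0:
/* Sketch: request one tee source pad per branch and link them manually.
 * gst_element_request_pad_simple() is available from GStreamer 1.20. */
GstPad *tee_display_pad = gst_element_request_pad_simple (tee, "src_%u");
GstPad *tee_ml_pad      = gst_element_request_pad_simple (tee, "src_%u");

/* Display branch: tee -> qtivcomposer -> waylandsink. */
GstPad *composer_pad = gst_element_request_pad_simple (qtivcomposer, "sink_%u");
gst_pad_link (tee_display_pad, composer_pad);
gst_element_link (qtivcomposer, waylandsink);

/* Inference branch: tee -> qtimlvconverter -> qtimlelement -> qtimlvdetection. */
GstPad *conv_pad = gst_element_get_static_pad (qtimlvconverter, "sink");
gst_pad_link (tee_ml_pad, conv_pad);
gst_object_unref (conv_pad);
gst_element_link_many (qtimlvconverter, qtimlelement, qtimlvdetection, NULL);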
g. Main Function
This is where everything starts:
int main(int argc, char *argv[]) {
    appctx = gst_app_context_new();
    parse_json(config_file, &options, appctx);
    find_usb_camera_node(appctx);
    create_pipe(appctx, &options);
    gst_element_set_state(pipeline, GST_STATE_PAUSED);
    g_main_loop_run(appctx->mloop);
}
It:
- Initializes the app context
- Reads the config file
- Finds the USB camera
- Builds the pipeline
- Runs the main loop
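Because glib-unix.h is included, the application can be shut down cleanly when you press CTRL+C by attaching a SIGINT watch to the main loop. Below is a sketch of such a handler; the function name and exact behavior are assumptions, g_unix_signal_add() is the actual GLib call:
/* Sketch: stop the pipeline and quit the main loop on CTRL+C (SIGINT).
 * SIGINT comes from <signal.h>. */
static gboolean
handle_interrupt_signal (gpointer userdata)
{
  GstCameraAppCtx *appctx = (GstCameraAppCtx *) userdata;

  gst_element_set_state (appctx->pipeline, GST_STATE_NULL);
  g_main_loop_quit (appctx->mloop);
  return G_SOURCE_REMOVE;   // remove the signal watch after it fires
}

/* Registered before g_main_loop_run(), e.g. in main(): */
g_unix_signal_add (SIGINT, handle_interrupt_signal, appctx);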
h. Updating the sample application
You can update the sample applications based on:
- The explanations above
- The image formats and sizes supported by the connected USB camera, which you can query with yavta as described in Get image format and size; modify the JSON config or source code accordingly
5️⃣ Compile the sample app utils to get the latest headers
cd gst-plugins-qti-oss/gst-sample-apps/gst-sample-apps-utils
mkdir build; cd build
cmake \
   -DGST_VERSION_REQUIRED=1.20.1 \
   -DSYSROOT_INCDIR=/usr/include \
   -DSYSROOT_LIBDIR=/usr/lib \
   -DGST_PLUGINS_QTI_OSS_INSTALL_BINDIR=/usr/bin \
   -DGST_PLUGINS_QTI_OSS_INSTALL_CONFIG=/etc/configs \
   -DENABLE_CAMERA=TRUE \
   -DENABLE_VIDEO_ENCODE=TRUE \
   -DENABLE_VIDEO_DECODE=TRUE \
   -DENABLE_DISPLAY=TRUE \
   -DENABLE_ML=TRUE \
   -DENABLE_AUDIO=TRUE \
   -DCAMERA_SERVICE=LECAM \
   -DGST_PLUGINS_QTI_OSS_INSTALL_INCDIR=/usr/include \
   ..
make
make install
6️⃣ Navigate to the sample app you want to modify, apply your changes, and compile it. This example shows how to build the object detection app.
cd gst-plugins-qti-oss/gst-sample-apps/gst-ai-object-detection
mkdir build; cd build
cmake \
   -DGST_VERSION_REQUIRED=1.20.1 \
   -DSYSROOT_INCDIR=/usr/include \
   -DSYSROOT_LIBDIR=/usr/lib \
   -DGST_PLUGINS_QTI_OSS_INSTALL_BINDIR=/usr/bin \
   -DGST_PLUGINS_QTI_OSS_INSTALL_CONFIG=/etc/configs \
   -DENABLE_CAMERA=TRUE \
   -DENABLE_VIDEO_ENCODE=TRUE \
   -DENABLE_VIDEO_DECODE=TRUE \
   -DENABLE_DISPLAY=TRUE \
   -DENABLE_ML=TRUE \
   -DENABLE_AUDIO=TRUE \
   -DCAMERA_SERVICE=LECAM \
   -DGST_PLUGINS_QTI_OSS_INSTALL_INCDIR=/usr/include \
   ..
make
make install
Every sample app must be compiled individually.
7️⃣ Run the compiled sample application
gst-ai-usb-camera-app
To display the available help options, run the following command in the SSH shell:
gst-ai-usb-camera-app -h
To stop the use case, press CTRL + C
Reference Documentation:
AI developer workflow - Ubuntu on Qualcomm® IoT Platforms Documentation